
    6-gene promoter methylation assay is potentially applicable for prostate cancer clinical staging based on urine collection following prostatic massage

    The detection of prostate cancer (PCa) biomarkers in bodily fluids, a process known as liquid biopsy, is a promising approach, and it is particularly attractive in urine, whose collection is maximally non-invasive. A number of gene panels proposed for this purpose have allowed discrimination between disease-free prostate and PCa; however, they bear no significant prognostic value. To develop a gene panel for PCa diagnosis and prognosis, the methylation status of 17 cancer-associated genes was analyzed in urine cell-free DNA obtained from 31 patients with PCa and 33 control individuals using methylation-specific polymerase chain reaction (MSP). Of these, 13 genes showed an increased methylation frequency in patients with PCa compared with controls. No prior association had been reported between PCa and the adenomatosis polyposis coli 2 (APC2), homeobox A9, Wnt family member 7A (WNT7A) and N-Myc downstream-regulated gene 4 protein genes. A 6-gene panel consisting of APC2, cadherin 1, forkhead box P1, leucine rich repeat containing 3B, WNT7A and zinc finger protein of the cerebellum 4 was subsequently developed, providing PCa detection with 78% sensitivity and 100% specificity. The number of genes methylated (NGM) value introduced for this panel increased from 0.27 in control individuals to 4.6 and 4.25 in patients with highly developed and with metastatic T2/T3 stage cancer, respectively. Defining the NGM value may therefore not only allow the detection of PCa, but also provide a rough evaluation of tumor malignancy and metastatic potential by non-invasive MSP analysis of urine samples.
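    The abstract does not spell out how the NGM readout is computed, but it reduces to counting methylation-positive panel genes in a sample. Below is a minimal Python sketch of that count under stated assumptions: the standard gene symbols (CDH1, FOXP1, LRRC3B, ZIC4) for the genes named above, a boolean per-gene MSP call as input, and an illustrative decision cutoff of at least one methylated gene, which is not taken from the paper.

```python
# Minimal sketch: number-of-genes-methylated (NGM) score for the 6-gene
# panel named in the abstract. Input format and cutoff are assumptions.

# Standard symbols for the panel genes listed in the abstract.
PANEL = ["APC2", "CDH1", "FOXP1", "LRRC3B", "WNT7A", "ZIC4"]

def ngm_score(msp_calls):
    """Count panel genes with a methylation-positive MSP call."""
    return sum(bool(msp_calls.get(gene, False)) for gene in PANEL)

# Hypothetical patient with three methylated panel genes.
patient = {"APC2": True, "CDH1": False, "FOXP1": True,
           "LRRC3B": False, "WNT7A": True, "ZIC4": False}

score = ngm_score(patient)
print(f"NGM = {score}")  # NGM = 3
# Illustrative cutoff only; the paper's actual decision rule is not given here.
print("panel positive" if score >= 1 else "panel negative")
```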

    Geospatial data analysis in Russia’s geoweb

    The chapter examines the role of geospatial data in Russia's online ecosystem. Facilitated by the rise of geographic information systems and user-generated content, the distribution of geospatial data has blurred the line between physical spaces and their virtual representations. The chapter discusses the sources of these data available for Digital Russian Studies (e.g., social data and crowdsourced databases), together with novel techniques for extracting geolocation from various data formats (e.g., textual documents and images). It also scrutinizes different ways of using these data, ranging from mapping the spatial distribution of social and political phenomena, to investigating how geotagged data are used in the digitization of cultural practices, to exploring the use of the geoweb for narrating individual and collective identities online.
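    The chapter itself is descriptive, but one technique it names, extracting geolocation from images, is easy to illustrate. The sketch below reads GPS coordinates from a photo's EXIF metadata using the Pillow library; it assumes an image that actually carries EXIF GPS tags, and the file name is hypothetical.

```python
# Minimal sketch: read EXIF GPS coordinates from a photo using Pillow.
# Assumes the image carries GPS tags; many web images have them stripped.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def exif_gps(path):
    """Return (latitude, longitude) in decimal degrees, or None."""
    gps_ifd = Image.open(path).getexif().get_ifd(0x8825)  # GPSInfo IFD
    if not gps_ifd:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    def to_degrees(dms, ref):
        d, m, s = (float(x) for x in dms)  # degrees, minutes, seconds
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(exif_gps("photo.jpg"))  # hypothetical file name
```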

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. Comment: 27 pages, 17 figures plus references and appendices; repo: https://github.com/google/BIG-bench
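    As a concrete illustration of how such a benchmark is consumed, the sketch below scores a model on one BIG-bench-style JSON task by exact match. The "examples"/"input"/"target" field names follow the repository's documented JSON task format, but treat them as assumptions here; the model function and the task file name are hypothetical stand-ins.

```python
# Minimal sketch: exact-match accuracy of a model on one BIG-bench-style
# JSON task. Field names are assumed from the repo's JSON task format.
import json

def my_model(prompt):
    # Hypothetical stand-in for a real language-model call.
    return "42"

def evaluate(task_path, model):
    with open(task_path) as f:
        examples = json.load(f)["examples"]
    hits = 0
    for ex in examples:
        targets = ex["target"]
        if isinstance(targets, str):  # a task may list one target or several
            targets = [targets]
        pred = model(ex["input"]).strip()
        hits += any(pred == t.strip() for t in targets)
    return hits / len(examples)

# print(evaluate("task.json", my_model))  # hypothetical task file
```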
